Decayed Markov Chain Monte Carlo for Interactive POMDPs

Authors

  • Yanlin Han
  • Piotr Gmytrasiewicz
Abstract

To act optimally in a partially observable, stochastic, and multi-agent environment, an autonomous agent needs to maintain a belief about the world at any given time. An extension of partially observable Markov decision processes (POMDPs), called interactive POMDPs (I-POMDPs), provides a principled framework for planning and acting in such settings. I-POMDPs augment POMDP beliefs by including models of other agents in the state space, which forms a hierarchical belief structure representing an agent's belief about the physical state, its beliefs about the other agents, and their beliefs about others' beliefs. This nested hierarchy results in a dramatic increase in belief space complexity. To perform belief updates in such settings, we propose a new approximation method that utilizes decayed Markov chain Monte Carlo (D-MCMC). For problems of various complexities, we show that our approach effectively mitigates the belief space complexity and competes with other Monte Carlo sampling algorithms for multi-agent systems, such as the interactive particle filter (I-PF). We also compare their accuracy and efficiency, and suggest applicable scenarios for each algorithm.
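To make the decayed MCMC idea concrete, below is a minimal Python sketch of decayed MCMC filtering on a toy hidden Markov model rather than the full nested I-POMDP belief hierarchy. The toy transition and observation tables, the decay exponent `alpha`, and all function names are assumptions made for illustration, following the general scheme of the "Decayed MCMC Filtering" paper listed under similar resources below: at each filtering step the sampler picks a past time slice with probability decaying polynomially in its age, and resamples that slice with a Gibbs step.

```python
import random

# Toy discrete HMM, for illustration only (NOT the I-POMDP hierarchy).
N_STATES = 3
T = [[0.8, 0.1, 0.1],    # T[s][s']: transition probabilities
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
O = [[0.9, 0.05, 0.05],  # O[s][o]: observation probabilities
     [0.05, 0.9, 0.05],
     [0.05, 0.05, 0.9]]

def sample(weights):
    """Draw an index proportionally to a list of unnormalized weights."""
    r = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def decayed_slice(t, alpha=1.2):
    """Pick a time slice s in [0, t] to resample; slice s is chosen with
    probability proportional to 1 / (t - s + 1)**alpha, so MCMC effort
    decays polynomially with the age of the slice."""
    return sample([(t - s + 1) ** (-alpha) for s in range(t + 1)])

def gibbs_step(traj, s, obs):
    """Gibbs-resample the state at slice s, conditioned on its chain
    neighbors and the observation at s."""
    weights = []
    for x in range(N_STATES):
        w = O[x][obs[s]]
        if s > 0:
            w *= T[traj[s - 1]][x]
        if s + 1 < len(traj):
            w *= T[x][traj[s + 1]]
        weights.append(w)
    traj[s] = sample(weights)

def filter_step(traj, obs, sweeps=200, burn=50):
    """Extend the trajectory for the newest observation, run decayed-MCMC
    sweeps over the history, and return an empirical belief over the
    current state."""
    traj.append(sample(T[traj[-1]]) if traj else random.randrange(N_STATES))
    counts = [0] * N_STATES
    for i in range(sweeps):
        gibbs_step(traj, decayed_slice(len(traj) - 1), obs)
        if i >= burn:
            counts[traj[-1]] += 1
    return [c / sum(counts) for c in counts]

# Usage: feed observations one at a time and read off the current belief.
obs, traj = [0, 0, 1, 2, 2], []
for t in range(len(obs)):
    belief = filter_step(traj, obs[:t + 1])
print("belief over current state:", belief)
```

Concentrating resampling effort near the present is what keeps the per-step cost roughly constant as the history grows, which appears to be the property the abstract appeals to when arguing that D-MCMC mitigates the belief space complexity.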


Similar resources

Monte Carlo Sampling Methods for Approximating Interactive POMDPs

Partially observable Markov decision processes (POMDPs) provide a principled framework for sequential planning in uncertain single agent settings. An extension of POMDPs to multiagent settings, called interactive POMDPs (I-POMDPs), replaces POMDP belief spaces with interactive hierarchical belief systems which represent an agent’s belief about the physical world, about beliefs of other agents, ...


Decayed MCMC Filtering

Filtering, i.e., estimating the state of a partially observable Markov process from a sequence of observations, is one of the most widely studied problems in control theory, AI, and computational statistics. Exact computation of the posterior distribution is generally intractable for large discrete systems and for nonlinear continuous systems, so a good deal of effort has gone into developing r...


Learning Others' Intentional Models in Multi-Agent Settings Using Interactive POMDPs

Interactive partially observable Markov decision processes (I-POMDPs) provide a principled framework for planning and acting in a partially observable, stochastic and multiagent environment, extending POMDPs to multi-agent settings by including models of other agents in the state space and forming a hierarchical belief structure. In order to predict other agents’ actions using I-POMDP, we propo...


Learning in POMDPs with Monte Carlo Tree Search

The POMDP is a powerful framework for reasoning under outcome and information uncertainty, but constructing an accurate POMDP model is difficult. Bayes-Adaptive Partially Observable Markov Decision Processes (BA-POMDPs) extend POMDPs to allow the model to be learned during execution. BA-POMDPs are a Bayesian RL approach that, in principle, allows for an optimal trade-off between exploitation an...





Publication date: 2016